
    NCBO Ontology Recommender 2.0: An Enhanced Approach for Biomedical Ontology Recommendation

    Biomedical researchers use ontologies to annotate their data with ontology terms, enabling better data integration and interoperability. However, the number, variety and complexity of current biomedical ontologies make it cumbersome for researchers to determine which ones to reuse for their specific needs. To overcome this problem, in 2010 the National Center for Biomedical Ontology (NCBO) released the Ontology Recommender, a service that receives a biomedical text corpus or a list of keywords and suggests ontologies appropriate for referencing the indicated terms. We developed a new version of the NCBO Ontology Recommender. Called Ontology Recommender 2.0, it uses a new recommendation approach that evaluates the relevance of an ontology to biomedical text data according to four criteria: (1) the extent to which the ontology covers the input data; (2) the acceptance of the ontology in the biomedical community; (3) the level of detail of the ontology classes that cover the input data; and (4) the specialization of the ontology to the domain of the input data. Our evaluation shows that the enhanced recommender provides higher-quality suggestions than the original approach: better coverage of the input data, more detailed information about its concepts, greater specialization to the domain of the input data, and greater acceptance and use in the community. In addition, it provides users with more explanatory information, along with suggestions of not only individual ontologies but also groups of ontologies, and it can be customized to fit the needs of different scenarios. Ontology Recommender 2.0 combines the strengths of its predecessor with a range of adjustments and new features that improve its reliability and usefulness. Ontology Recommender 2.0 recommends over 500 biomedical ontologies from the NCBO BioPortal platform, where it is openly available. (29 pages, 8 figures, 11 tables)
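
    A minimal sketch of how the four criteria listed above might be combined into a single ranking score. The weights, the 0-1 criterion scores, and the candidate ontology names are hypothetical illustrations, not the actual formulas or values used by Ontology Recommender 2.0.

        # Hypothetical aggregation of the four evaluation criteria described above.
        # Each criterion score is assumed to be normalized to [0, 1].
        from dataclasses import dataclass

        @dataclass
        class OntologyScores:
            name: str
            coverage: float        # how much of the input text the ontology covers
            acceptance: float      # acceptance and use in the biomedical community
            detail: float          # level of detail of the covering classes
            specialization: float  # fit to the domain of the input data

        # Illustrative weights (an assumption, not the service's real defaults).
        WEIGHTS = {"coverage": 0.55, "acceptance": 0.15,
                   "detail": 0.15, "specialization": 0.15}

        def aggregate(s: OntologyScores) -> float:
            """Weighted sum of the four normalized criterion scores."""
            return (WEIGHTS["coverage"] * s.coverage
                    + WEIGHTS["acceptance"] * s.acceptance
                    + WEIGHTS["detail"] * s.detail
                    + WEIGHTS["specialization"] * s.specialization)

        candidates = [
            OntologyScores("ONTOLOGY_A", 0.82, 0.90, 0.70, 0.40),
            OntologyScores("ONTOLOGY_B", 0.65, 0.60, 0.55, 0.85),
        ]
        for ont in sorted(candidates, key=aggregate, reverse=True):
            print(f"{ont.name}: {aggregate(ont):.3f}")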

    Cluster, Classify, Regress: A General Method For Learning Discontinuous Functions

    This paper presents a method for solving supervised learning problems in which the output is highly nonlinear and discontinuous. It is proposed to solve the problem in three stages: (i) cluster the input-output pairs, producing a label for each point; (ii) classify the data, using the cluster label as the output; and finally (iii) perform one separate regression for each class, where the training data corresponds to the subset of the original input-output pairs that have that label according to the classifier. To our knowledge, these three fundamental building blocks of machine learning have not previously been combined in this simple and powerful fashion. The approach can be viewed as a form of deep learning, where any of the intermediate layers can itself be deep. The utility and robustness of the methodology are illustrated on some toy problems, including one example problem arising from simulation of plasma fusion in a tokamak. (12 files, 6 figures)
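
    A minimal sketch of the three-stage pipeline described above, using scikit-learn. The choice of k-means for clustering, a random-forest classifier, and per-cluster linear regressors is an assumption for illustration; the paper leaves each stage open to any suitable model.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(500, 2))
        y = np.where(X[:, 0] > 0, 3.0 + X[:, 1], -2.0 * X[:, 1])  # discontinuous target

        # (i) Cluster the joint input-output points so each sample gets a label.
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
            np.column_stack([X, y]))

        # (ii) Classify: learn to predict the cluster label from the inputs alone.
        clf = RandomForestClassifier(random_state=0).fit(X, labels)

        # (iii) Regress: fit one regressor per cluster on that cluster's samples.
        regressors = {c: LinearRegression().fit(X[labels == c], y[labels == c])
                      for c in np.unique(labels)}

        def predict(X_new):
            """Classify each new point, then apply that cluster's regressor."""
            cls = clf.predict(X_new)
            out = np.empty(len(X_new))
            for c, reg in regressors.items():
                mask = cls == c
                if mask.any():
                    out[mask] = reg.predict(X_new[mask])
            return out

        print(predict(np.array([[0.5, 0.2], [-0.5, 0.2]])))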

    Whole Genome Phylogenetic Tree Reconstruction Using Colored de Bruijn Graphs

    We present kleuren, a novel assembly-free method for reconstructing phylogenetic trees using the Colored de Bruijn Graph. kleuren works by constructing the Colored de Bruijn Graph and then traversing it, finding bubble structures in the graph that provide phylogenetic signal. The bubbles are then aligned and concatenated to form a supermatrix, from which a phylogenetic tree is inferred. We introduce the algorithms that kleuren uses to accomplish this task, and show its performance in reconstructing the phylogenetic tree of 12 Drosophila species. kleuren reconstructed the established phylogenetic tree accurately, and is a viable tool for phylogenetic tree reconstruction from whole genome sequences. Software package available at: https://github.com/Colelyman/kleuren (6 pages, 3 figures; accepted at BIBE 2017)
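
    A minimal toy sketch of the idea described above: k-mers from each genome are recorded with a colour set, and a "bubble" is reported where paths diverge from a shared node and later reconverge. This is an illustrative simplification under assumed data structures, not kleuren's actual algorithm; kleuren would additionally use the colour sets to track which genomes follow each branch.

        from collections import defaultdict
        from itertools import product

        K = 4  # k-mer length for this toy example

        def kmers(seq, k=K):
            return [seq[i:i + k] for i in range(len(seq) - k + 1)]

        def build_colored_dbg(genomes):
            """Map each (k-1)-mer node to its successors, tagged with genome colours."""
            edges = defaultdict(lambda: defaultdict(set))
            for colour, seq in enumerate(genomes):
                for km in kmers(seq):
                    edges[km[:-1]][km[1:]].add(colour)
            return edges

        def find_simple_bubbles(edges, max_len=10):
            """Report nodes whose outgoing branches reconverge within max_len steps."""
            bubbles = []
            for start, succs in edges.items():
                if len(succs) < 2:
                    continue
                paths = {}
                for branch in succs:
                    node, seen = branch, [branch]
                    for _ in range(max_len):
                        nxt = list(edges.get(node, {}))
                        if len(nxt) != 1:      # stop at a fork, merge, or dead end
                            break
                        node = nxt[0]
                        seen.append(node)
                    paths[branch] = set(seen)
                for a, b in product(paths, paths):
                    if a < b and paths[a] & paths[b]:
                        bubbles.append((start, a, b))
            return bubbles

        genomes = ["ACGTACGGTTACG", "ACGTACAGTTACG"]  # differ by one substitution
        print(find_simple_bubbles(build_colored_dbg(genomes)))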

    Learning the Language of Genes: Representing Global Codon Bias with Deep Language Models

    Codon bias, the usage pattern of synonymous codons when a protein sequence is encoded as nucleotides, is a biological phenomenon that is not well understood. Current methods measure and model the codon bias of an organism for use in codon optimization: in synthetic biology, codon optimization is the task of selecting appropriate codons to reverse-translate a protein sequence into a nucleotide sequence that maximizes expression in a vector. The features these methods rely on include the codon adaptation index (CAI) [1], individual codon usage (ICU), hidden stop codons (HSC) [2], and codon context (CC) [3]. While explicitly modeling these features has helped us engineer proteins with high synthesis yields, it is unclear what other biological features should be taken into account during codon selection to maximize protein synthesis. In this article, we present a method for modeling global codon bias with deep language models that is more robust than current methods because it allows more contextual information and longer-range dependencies to be considered during codon selection.
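
    A minimal sketch of one of the classical features named above, the codon adaptation index (CAI): each codon is weighted by its frequency relative to the most-used synonymous codon in a reference set, and a sequence's CAI is the geometric mean of those weights. The small codon table and reference counts are illustrative assumptions; this is not the deep language model presented in the article.

        from math import exp, log

        # Illustrative subset of the standard genetic code (synonymous families only).
        SYNONYMS = {
            "F": ["TTT", "TTC"],
            "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
            "K": ["AAA", "AAG"],
        }

        # Hypothetical codon counts from a reference set of highly expressed genes.
        ref_counts = {"TTT": 80, "TTC": 160, "TTA": 10, "TTG": 30, "CTT": 20,
                      "CTC": 40, "CTA": 5, "CTG": 200, "AAA": 300, "AAG": 150}

        # Relative adaptiveness w: each codon's count divided by the maximum in its family.
        w = {}
        for family in SYNONYMS.values():
            top = max(ref_counts[c] for c in family)
            for c in family:
                w[c] = ref_counts[c] / top

        def cai(cds):
            """Geometric mean of w over the codons of a coding sequence."""
            codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
            weights = [w[c] for c in codons if c in w]
            return exp(sum(log(x) for x in weights) / len(weights))

        print(round(cai("TTCCTGAAA"), 3))  # favoured codons -> CAI of 1.0
        print(round(cai("TTTCTAAAG"), 3))  # rarer codons -> lower CAI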

    Inferring gene regulatory networks from asynchronous microarray data with AIRnet

    Background: Modern approaches to treating genetic disorders, cancers and even epidemics rely on a detailed understanding of the underlying gene signaling network. Previous work has used time series microarray data to infer gene signaling networks, given a large number of accurate time series samples. The microarray data available for many biological experiments, however, are limited to a small number of arrays with little or no time series guarantees. When several samples are averaged to examine differences in mean value between a diseased and a normal state, information from individual samples that could indicate a gene relationship can be lost. Results: Asynchronous Inference of Regulatory Networks (AIRnet) infers gene signaling networks under more practical assumptions about the microarray data. By learning correlation patterns for the changes in microarray values across all pairs of samples, accurate network reconstructions can be performed with data that are normally available in microarray experiments. Conclusions: By focusing on the changes between microarray samples, rather than on absolute values, more information can be gleaned from expression data.
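
    A minimal sketch of the kind of inference described above: for every pair of samples, compute each gene's expression change, then correlate the genes' change profiles and keep strongly correlated pairs as candidate edges. The correlation measure, threshold, and simulated data are illustrative assumptions, not AIRnet's actual algorithm.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(1)
        n_genes, n_samples = 6, 8
        expr = rng.normal(size=(n_genes, n_samples))                  # genes x samples
        expr[1] = 0.9 * expr[0] + 0.1 * rng.normal(size=n_samples)    # gene 1 tracks gene 0

        # Changes between all unordered pairs of samples (no time ordering assumed).
        pairs = list(combinations(range(n_samples), 2))
        deltas = np.array([[expr[g, j] - expr[g, i] for (i, j) in pairs]
                           for g in range(n_genes)])

        # Correlate the change profiles and keep strong correlations as candidate edges.
        corr = np.corrcoef(deltas)
        threshold = 0.8
        edges = [(a, b, round(corr[a, b], 2))
                 for a in range(n_genes) for b in range(a + 1, n_genes)
                 if abs(corr[a, b]) >= threshold]
        print(edges)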

    PINK1 Is Selectively Stabilized on Impaired Mitochondria to Activate Parkin

    Mutations in PINK1 or Parkin lead to familial parkinsonism. The authors suggest that PINK1 and Parkin form a pathway that senses damaged mitochondria and selectively targets them for degradation

    De novo Assembly of the Burying Beetle Nicrophorus orbicollis (Coleoptera: Silphidae) Transcriptome Across Developmental Stages with Identification of Key Immune Transcripts

    Burying beetles (Nicrophorus spp.) are among the relatively few insects that provide parental care while not belonging to the eusocial insects such as ants or bees. This behavior incurs energy costs, as evidenced by immune deficits and shorter life-spans in reproducing beetles. In the absence of an assembled transcriptome, relatively little is known about the molecular biology of these beetles. This work details the assembly and analysis of the Nicrophorus orbicollis transcriptome at multiple developmental stages. RNA-Seq reads were obtained by next-generation sequencing and the transcriptome was assembled using the Trinity assembler. The assembly was validated by functional characterization using Gene Ontology (GO), Eukaryotic Orthologous Groups (KOG), and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses. Differential expression analysis highlights developmental stage-specific expression patterns, and immunity-related transcripts are discussed. The data presented provide a valuable molecular resource to aid further investigation into immunocompetence throughout this organism's sexual development.
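
    A minimal sketch of the kind of stage-wise comparison described above: given per-transcript read counts for two developmental stages, normalize for library size and rank transcripts by log2 fold change. The counts, transcript names, normalization, and stages are hypothetical illustrations; the study's actual differential expression analysis may use different tools and statistics.

        import numpy as np
        import pandas as pd

        # Hypothetical read counts per transcript for two developmental stages.
        counts = pd.DataFrame(
            {"larva": [1200, 15, 300, 800], "adult": [100, 90, 310, 20]},
            index=["defensin-like", "attacin-like", "actin", "lysozyme-like"])

        # Library-size normalization (counts per million), then log2 fold change.
        cpm = counts / counts.sum() * 1e6
        log2fc = np.log2((cpm["adult"] + 1) / (cpm["larva"] + 1))

        result = pd.DataFrame({"log2FC_adult_vs_larva": log2fc.round(2)})
        print(result.sort_values("log2FC_adult_vs_larva"))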

    The environmental and geomorphological impacts of historical gold mining in the Ohinemuri and Waihou river catchments, Coromandel, New Zealand

    Between 1875 and 1955 approximately 250,000 Mg yr⁻¹ of mercury-, arsenic-, and cyanide-contaminated mine tailings were discharged directly into the Ohinemuri River and its tributaries, in the Coromandel Region, North Island, New Zealand. A devastating flood on 14 January 1907 deposited large amounts of mine waste across the floodplain of the Ohinemuri and Waihou rivers in the vicinity of the township of Paeroa. The 1907 mine-waste flood deposit was identified as a dirty yellow silt in cores and floodplain profiles, with a thickness ranging from 0.15–0.50 m. Geochemical analysis of the mine waste shows elevated concentrations of Pb (~200–570 mg kg⁻¹) and As (~30–80 mg kg⁻¹), compared to early Holocene background concentrations (Pb < 30 mg kg⁻¹; As < 17 mg kg⁻¹). Bulk sediment samples recovered from the river channel and overbank deposits also show elevated concentrations of Pb (~110 mg kg⁻¹), Zn (~140–320 mg kg⁻¹), Ag (~3 mg kg⁻¹), and Hg (~0.4 mg kg⁻¹). Using the mine-waste deposit as a chronological marker shows that sedimentation rates increased from ~0.2 mm yr⁻¹ in the early Holocene to 5.5–26.8 mm yr⁻¹ following the 1907 flood. Downstream trends in the thickness of the flood deposit show that local-scale geomorphic factors are a significant influence on the deposition of mine waste in such events, with storage of mine waste greatest in the upstream reaches of the floodplain. The volume of mine waste estimated to be stored in the Ohinemuri floodplain is ~1.13 million m³, an order of magnitude larger than recent well-publicised tailings-dam failures such as the 1996 Porco (Bolivia) and the 2000 Baia Mare and Baia Borsa (Romania) accidents; the 1907 flood constituted, and was recognised at the time as, a significant geomorphological and environmental event. The mine-waste material remains in the floodplain today, representing a sizable legacy store of contaminant metals and metalloids that pose a long-term risk to the Ohinemuri and Waihou ecosystems.
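
    A minimal sketch of the chronological-marker arithmetic described above: the post-1907 sedimentation rate is the thickness of sediment overlying the 1907 mine-waste layer divided by the years elapsed since the flood. The core depths and sampling year below are hypothetical; the rates reported in the paper come from its own cores and profiles.

        SAMPLING_YEAR = 2015   # hypothetical year the cores were collected
        FLOOD_YEAR = 1907

        # Hypothetical depth (m) from the floodplain surface to the top of the
        # 1907 mine-waste marker layer in three cores.
        depth_to_marker_m = {"core A": 0.60, "core B": 1.45, "core C": 2.90}

        years = SAMPLING_YEAR - FLOOD_YEAR
        for core, depth in depth_to_marker_m.items():
            rate_mm_per_yr = depth * 1000 / years
            print(f"{core}: {rate_mm_per_yr:.1f} mm/yr")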

    Steady-state modulation of voltage-gated K+ channels in rat arterial smooth muscle by cyclic AMP-dependent protein kinase and protein phosphatase 2B

    Voltage-gated potassium channels (Kv) are important regulators of membrane potential in vascular smooth muscle cells, which is integral to controlling intracellular Ca2+ concentration and regulating vascular tone. Previous work indicates that Kv channels can be modulated by receptor-driven alterations of cyclic AMP-dependent protein kinase (PKA) activity. Here, we demonstrate that Kv channel activity is maintained by tonic activity of PKA. Whole-cell recording was used to assess the effect of manipulating PKA signalling on Kv and ATP-dependent K+ channels of rat mesenteric artery smooth muscle cells. Application of PKA inhibitors, KT5720 or H89, caused a significant inhibition of Kv currents. Tonic PKA-mediated activation of Kv appears maximal as application of isoprenaline (a β-adrenoceptor agonist) or dibutyryl-cAMP failed to enhance Kv currents. We also show that this modulation of Kv by PKA can be reversed by protein phosphatase 2B/calcineurin (PP2B). PKA-dependent inhibition of Kv by KT5720 can be abrogated by pre-treatment with the PP2B inhibitor cyclosporin A, or inclusion of a PP2B auto-inhibitory peptide in the pipette solution. Finally, we demonstrate that tonic PKA-mediated modulation of Kv requires intact caveolae. Pre-treatment of the cells with methyl-β-cyclodextrin to deplete cellular cholesterol, or adding caveolin-scaffolding domain peptide to the pipette solution to disrupt caveolae-dependent signalling each attenuated PKA-mediated modulation of the Kv current. These findings highlight a novel, caveolae-dependent, tonic modulatory role of PKA on Kv channels providing new insight into mechanisms and the potential for pharmacological manipulation of vascular tone

    'To live and die [for] Dixie': Irish civilians and the Confederate States of America

    Around 20,000 Irishmen served in the Confederate army in the Civil War. As a result, they left behind, in various Southern towns and cities, large numbers of friends, family, and community leaders. As with native-born Confederates, Irish civilian support was crucial to Irish participation in the Confederate military effort. Irish civilians also served in various supporting roles: in factories and hospitals, on railroads and diplomatic missions, and as boosters for the cause. They also, however, suffered in bombardments, sieges, and the blockade. Usually poorer than their native neighbours, they could not afford to become 'refugees' and move away from the centres of conflict. This essay, based on research from manuscript collections, contemporary newspapers, British Consular records, and Federal military records, will examine the role of Irish civilians in the Confederacy, and assess the effect this activity had on their integration into Southern communities. It will also look at Irish civilians in the defeat of the Confederacy, particularly when they came under Union occupation. Initial research shows that Irish civilians were not as upset as other whites in the South about Union victory. They welcomed a return to normalcy, and often 'collaborated' with Union authorities. Irish desertion rates in the Confederate army were also particularly high, and I will attempt to gauge whether Irish civilians played a role in this. All of the research in this paper will thus be put in the context of the Drew Gilpin Faust/Gary Gallagher debate on the influence of the Confederate homefront on military performance. By studying the Irish civilian experience one can assess how strong the Confederate national experiment was. Was it a nation without a nationalism?